homework 6, version 0
Submission by: Jazzy Doe (jazz@mit.edu)
Homework 6: Epidemic modeling III
18.S191, Fall 2020
This notebook contains built-in, live answer checks! In some exercises you will see a coloured box, which runs a test case on your code, and provides feedback based on the result. Simply edit the code, run it, and the check runs again.
For MIT students: there will also be some additional (secret) test cases that will be run as part of the grading process, and we will look at your notebook and write comments.
Feel free to ask questions!
"Jazzy Doe"
"jazz"
Let's create a package environment:
We began this module with data on the COVID-19 epidemic, but then looked at mathematical models. How can we make the connection between data and models?
Models have parameters, such as the rate of recovery from infection. Where do the parameter values come from? Ideally we would like to extract them from data. The goal of this homework is to do this by fitting a model to data.
For simplicity we will use data generated from the spatial model in Homework 5, rather than real-world data, and we will fit the simplest SIR model. But the same ideas apply more generally.
There are many ways to fit a function to data, but all must involve some form of optimization, usually minimization of a particular function, a loss function; this is the basis of the vast field of machine learning.
The loss function is a function of the model parameters; it measures how far the model output is from the data, for the given values of the parameters.
We emphasise that this material is pedagogical; there is no suggestion that these specific techniques should be used for actual calculations; rather, it is the underlying ideas that are important.
Exercise 1: Calculus without calculus
Before we jump in to simulating the SIR equations, let's experiment with a simple 1D function. In calculus, we learn techniques for differentiating and integrating symbolic equations, e.g. d/dx (xⁿ) = n·xⁿ⁻¹.
Instead, we use ✨ computers ✨ to approximate derivatives and integrals. Instead of applying rules to symbolic expressions, we use much simpler strategies that only use the output values of our function.
As a first example, we will approximate the derivative of a function. Our method is inspired by the analytical definition of the derivative:
f′(a) := lim[h→0] (f(a+h) − f(a)) / h
The finite difference method simply fixes a small value for h and uses (f(a+h) − f(a)) / h as an approximation of f′(a).
Exercise 1.1 - tangent line
👉 Write a function finite_difference_slope that takes a function f and numbers a and h. It returns the slope (f(a+h) − f(a)) / h.
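One possible sketch of such a function is below; the default step size h = 1e-3 is an illustrative assumption, not part of the prompt.

# Forward finite difference: approximate f′(a) by (f(a+h) − f(a)) / h.
# The default h = 1e-3 is just an illustrative choice.
finite_difference_slope(f::Function, a, h=1e-3) = (f(a + h) - f(a)) / h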
finite_difference_slope (generic function with 2 methods)
0.2
Got it!
Well done!
👉 Write a function tangent_line that takes the same arguments f, a and h, but returns a function: the tangent line with slope f′(a) (computed using finite_difference_slope) that passes through the point (a, f(a)).
Hint
Remember that functions are objects! For example, here is a function that returns the square root function:
function the_square_root_function()
f = x -> sqrt(x)
return f
end
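Combining this hint with finite_difference_slope, one possible sketch of tangent_line looks like this (the exact form is up to you):

# Return a new function x -> f(a) + slope * (x - a): the tangent line at a,
# with the slope approximated by finite_difference_slope.
function tangent_line(f, a, h)
    slope = finite_difference_slope(f, a, h)
    return x -> f(a) + slope * (x - a)
end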
tangent_line (generic function with 1 method)
Got it!
Splendid!
The slider below controls h. Notice that, as you decrease h, the computed line becomes a better approximation of the true tangent line at a.
0.31622776601683794
Exercise 1.2 - antiderivative
In the finite differences method, we approximated the derivative of a function:
f′(a) ≈ (f(a+h) − f(a)) / h
We can do something very similar to approximate the antiderivative of a function. Finding the antiderivative means that we use the slope f′ to reconstruct f.
This antiderivative problem is illustrated below. The only information that we have is the slope at any point a. Using only this information, we want to reconstruct f itself.
By rearranging the equation above, we get the Euler method:
f(a+h) ≈ f(a) + h·f′(a)
Using this formula, we only need to know the value f(a) and the slope f′(a) at a point to compute (approximately) the value a small step h later.
👉 Write a function euler_integrate_step that applies this formula to a known function f′ at a known point a, with known value f(a) and step size h, returning an estimate of f(a+h).
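A minimal sketch of such a step is below. The argument order is an assumption, and evaluating the slope at the right endpoint a + h (rather than at a) is one possible convention; either choice gives a valid Euler-style step.

# One Euler step: estimate f(a + h) from the known value fa = f(a),
# using the derivative fprime evaluated at the end of the step.
euler_integrate_step(fprime::Function, fa::Number, a::Number, h::Number) =
    fa + h * fprime(a + h)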
euler_integrate_step (generic function with 1 method)
Got it!
Fantastic!
👉 Write a function euler_integrate that takes a known function f′, an initial value f(a) and a range T with a == first(T) and h == step(T). It applies the function euler_integrate_step repeatedly, once per entry in T, to produce the sequence of approximated values f(a+h), f(a+2h), and so on.
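A sketch of the accumulation loop, reusing euler_integrate_step from above (argument names are assumptions):

# Apply euler_integrate_step once per entry in T, collecting the estimates.
function euler_integrate(fprime::Function, fa::Number, T::AbstractRange)
    h = step(T)
    results = Float64[]
    fx = fa
    for a in T
        fx = euler_integrate_step(fprime, fx, a, h)
        push!(results, fx)
    end
    return results
end

Called as euler_integrate(x -> 3x^2, 0, 0:0.1:10), this kind of implementation produces values like the ones shown below.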
euler_integrate (generic function with 1 method)
Let's try it out on f′(x) = 3x² with T ranging from 0 to 10. We already know the analytical antiderivative, f(x) = x³, so the result should be an array going from (approximately) 0.0 to 1000.0.
(Output, abbreviated: 0.003, 0.015, 0.042, 0.09, 0.165, …, 1015.05, 1045.65)
Got it!
Fantastic!
You see that our numerical antiderivative is not very accurate, but we can get a smaller error by choosing a smaller step size. Try it out!
There are also alternative integration methods that are more accurate with the same step size. Some methods also use the second derivative, other methods use multiple steps at once, etc.! This is the study of Numerical Methods.
Exercise 2: Simulating the SIR differential equations
Recall from the lectures that the ordinary differential equations (ODEs) for the SIR model are as follows:
ds/dt = −β·s·i
di/dt = β·s·i − γ·i
dr/dt = γ·i
where s, i and r are the fractions of the population that are susceptible, infectious and recovered, and β and γ are the infection and recovery rates.
We will use the simplest possible method to simulate these, namely the Euler method. The Euler method is not always a good method to solve ODEs accurately, but for our purposes it is good enough.
In the previous exercise, we introduced the Euler method for a 1D function, which you can see as an ODE that depends only on time. For the SIR equations, we have an ODE that depends only on the previous value, not on time, and we have 3 equations instead of 1.
The solution is quite simple: we apply the Euler method to each of the differential equations within a single time step to get new values for each of s, i and r.
👉 Implement a function euler_SIR_step(β, γ, sir_0, h) that performs a single Euler step for these equations with the given parameter values and initial values, with a step size h.
sir_0 is a 3-element vector, and you should return a new 3-element vector with the values after the timestep.
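A sketch of one such step, assuming sir_0 = [s, i, r] and the equations written above:

# One Euler step for the SIR system: update s, i and r from their current values.
function euler_SIR_step(β, γ, sir_0::Vector, h::Number)
    s, i, r = sir_0
    return [
        s - h * β * s * i,           # ds/dt = -β s i
        i + h * (β * s * i - γ * i), # di/dt =  β s i - γ i
        r + h * γ * i,               # dr/dt =  γ i
    ]
end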
euler_SIR_step (generic function with 1 method)
0.989901
0.010049
5.0e-5
👉 Implement a function euler_SIR(β, γ, sir_0, T) that applies the previously defined function over a time range T.
You should return a vector of vectors: a 3-element vector for each point in time.
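A sketch that reuses euler_SIR_step; whether to include the initial state as the first entry is a design choice left to you.

# Repeatedly apply euler_SIR_step, one step per point in the time range T.
function euler_SIR(β, γ, sir_0::Vector, T::AbstractRange)
    h = step(T)
    results = [float.(sir_0)]
    for _ in T[2:end]
        push!(results, euler_SIR_step(β, γ, results[end], h))
    end
    return results
end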
euler_SIR (generic function with 1 method)
0.0:0.1:60.0
(Output, abbreviated: [0.989703, 0.010147, 0.00015], [0.989402, 0.0102961, 0.000302205], …, [0.217025, 0.025404, 0.757571])
Let's plot s, i and r as a function of time.
plot_sir! (generic function with 1 method)
👉 Do you see an epidemic outbreak (i.e. a rapid growth in number of infected individuals, followed by a decline)? What happens after a long time? Does everybody get infected?
blabla
👉 Make an interactive visualization in which you vary β and γ.
Exercise 3: Numerical gradient
For fitting we need optimization, and for optimization we will use derivatives (rates of change). In Exercise 1, we wrote a function finite_difference_slope(f, a) to approximate the derivative f′(a). In this exercise, we will do the same for functions of two variables.
Exercise 3.1
👉 Write functions ∂x(f, a, b) and ∂y(f, a, b) that calculate the partial derivatives ∂f/∂x and ∂f/∂y of a function f(x, y) at the point (a, b).
Recall that the partial derivative with respect to x is the derivative of the one-variable function obtained by holding y fixed (and vice versa).
You should use anonymous functions for this. These have the form x -> x^2, meaning "the function that sends x to x²".
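With anonymous functions, each partial derivative becomes a one-liner that reuses finite_difference_slope from Exercise 1 (assuming it falls back to a default step size):

# ∂f/∂x at (a, b): slope of the one-variable function x -> f(x, b) at x = a.
∂x(f::Function, a, b) = finite_difference_slope(x -> f(x, b), a)

# ∂f/∂y at (a, b): slope of the one-variable function y -> f(a, y) at y = b.
∂y(f::Function, a, b) = finite_difference_slope(y -> f(a, y), b)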
∂x (generic function with 1 method)
42.00699999999813
Got it!
Awesome!
∂y (generic function with 1 method)
1.0000000000047748
Got it!
Great! 🎉
Exercise 3.2
👉 Write a function gradient(f, a, b) that calculates the gradient of a function f(x, y) at the point (a, b), i.e. the vector of its two partial derivatives.
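A sketch: the gradient is simply the vector of the two partial derivatives defined above.

# ∇f(a, b) as a 2-element vector [∂f/∂x, ∂f/∂y].
gradient(f::Function, a, b) = [∂x(f, a, b), ∂y(f, a, b)]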
gradient (generic function with 1 method)
42.007
1.0
Got it!
Good job!
Exercise 4: Minimisation using gradient descent
In this exercise we will use gradient descent to find local minima of (smooth enough) functions.
To do so we will think of a function as a hill. To find a minimum we should "roll down the hill".
Exercise 4.1
We want to minimize a 1D function, i.e. a function f: ℝ → ℝ that takes a number and returns a number.
👉 Write a function gradient_descent_1d_step(f, x0) that performs a single gradient descent step, starting from the point x0 and using your function finite_difference_slope to approximate the derivative. The result should be the next guess for x.
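A sketch of a single step; the learning rate η = 0.01 is an assumption (any small positive value will do):

# Move a small distance against the slope: x_new = x0 − η·f′(x0).
function gradient_descent_1d_step(f, x0; η=0.01)
    return x0 - η * finite_difference_slope(f, x0)
end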
gradient_descent_1d_step (generic function with 1 method)
4.899989999999974
Got it!
You got the right answer!
gradient_1d_viz (generic function with 1 method)
👉 Write a function gradient_descent_1d(f, x0) that repeatedly applies the previous function (N_steps times), starting from the point x0, like in the visualisation above. The result should be the final guess for x.
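A sketch that simply repeats the step a fixed number of times (the default N_steps is an assumption):

# Repeat gradient_descent_1d_step N_steps times and return the final guess.
function gradient_descent_1d(f, x0; N_steps=1000)
    x = x0
    for _ in 1:N_steps
        x = gradient_descent_1d_step(f, x)
    end
    return x
end

A natural refinement, related to the question below, is to stop early once the change in x falls below some tolerance.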
gradient_descent_1d (generic function with 1 method)
4.999499991586009
Got it!
Splendid!
Right now we take a fixed number of steps, even if the minimum is found quickly. What would be a better way to decide when to end the function?
blabla
Exercise 4.2
Multivariable calculus tells us that the gradient ∇f(a, b) of a function f at the point (a, b) points in the direction in which f increases the fastest, so to find a minimum we should repeatedly step in the opposite direction, −∇f.
👉 Write functions gradient_descent_2d_step(f, x0, y0) and gradient_descent_2d(f, x0, y0) that do the same for functions f(x, y) of two variables.
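A sketch of both, built on the gradient function from Exercise 3 (the learning rate and step count are assumptions):

# One 2D step: move against the gradient in both coordinates.
function gradient_descent_2d_step(f, x0, y0; η=0.01)
    gx, gy = gradient(f, x0, y0)
    return x0 - η * gx, y0 - η * gy
end

# Repeat the 2D step N_steps times and return the final (x, y) guess.
function gradient_descent_2d(f, x0, y0; N_steps=1000)
    x, y = x0, y0
    for _ in 1:N_steps
        x, y = gradient_descent_2d_step(f, x, y)
    end
    return x, y
end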
gradient_descent_2d_step (generic function with 1 method)
gradient_descent_2d (generic function with 1 method)
2.99957
1.99976
himmelbau (generic function with 1 method)
We also prepared a 3D visualisation if you like! It's a bit slow...
false
gradient_2d_viz_3d (generic function with 1 method)
gradient_2d_viz_2d (generic function with 1 method)
👉 Can you find different minima?
Exercise 5: Learning parameter values
In this exercise we will apply gradient descent to fit a simple function to some data, by finding the parameter values for which the function fits best.
To do so we need to define what "best" means. We will define a measure of the distance between the function and the data, given by a loss function, which itself depends on the values of the parameters of the function.
The iterative procedure by which we gradually adjust the parameter values to improve the loss function is often called machine learning or just learning, since the computer is "discovering" information in a gradual way, which is supposed to remind us of how humans learn. [Hint: This is not how humans learn.]
Exercise 5.1 - 🎲 frequencies
We generate a small dataset by throwing 10 dice, and counting the sum. We repeat this experiment many times, giving us a frequency distribution in a familiar shape.
dice_frequencies (generic function with 1 method)
10:60
(Output, abbreviated: 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0003, 0.0004, 0.0, …)
Let's try to fit a Gaussian (normal) distribution. Its PDF with mean μ and standard deviation σ is
f(x) = 1 / (σ·√(2π)) · exp( −(x − μ)² / (2σ²) )
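As a Julia function, that formula is a direct transcription:

# Gaussian PDF with mean μ and standard deviation σ.
gauss(x, μ, σ) = exp(-(x - μ)^2 / (2σ^2)) / (σ * sqrt(2π))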
👉 (Not graded) Manually fit a Gaussian distribution to our data by adjusting μ and σ until you find a good fit.
μ =
σ =
Show manual fit:
gauss (generic function with 1 method)
What we just did was adjust the function parameters until we found the best possible fit. Let's automate this process! To do so, we need to quantify how good or bad a fit is.
👉 Define a loss function to measure the "distance" between the actual data and the function. It will depend on the values of μ and σ that you choose.
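One common choice is the sum of squared differences between the Gaussian and the measured frequencies. In the sketch below, dice_x and dice_y are placeholder names for the data vectors produced by dice_frequencies; use whatever names your notebook defines.

# Least-squares loss: how far is the Gaussian with parameters (μ, σ) from the data?
function loss_dice(μ, σ)
    return sum((gauss(x, μ, σ) - y)^2 for (x, y) in zip(dice_x, dice_y))
end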
loss_dice (generic function with 1 method)
false
👉 Use your gradient_descent_2d function to find a local minimum of the loss function, storing the resulting parameters in found_μ and found_σ.
35.0212
5.52404
Go back to the graph to see your optimized gaussian curve!
If your fit is close, then probability theory tells us that the found parameter μ should be close to the (weighted) mean of the data, and σ to its standard deviation.
0.01867851401522813
0.09295362937137064
Got it!
Well done!
35.00255
5.43109045197188
Exercise 6: Putting it all together — fitting an SIR model to data
In this exercise we will fit the (non-spatial) SIR ODE model from Exercise 1 to some data generated from the spatial model in Problem Set 4. If we are able to find a good fit, that would suggest that the spatial aspect "does not matter" too much for the dynamics of these models. If the fit is not so good, perhaps there is an important effect of space. (As usual in statistics, and indeed in modelling in general, we should be very cautious of making claims of this nature.)
This fitting procedure will be different from that in Exercise 4, however: we no longer have an explicit form for the function that we are fitting – rather, it is the output of an ODE! So what should we do?
We will try to find the parameters β and γ for which the output of the SIR ODEs best matches the data.
Exercise 6.1
Below is the result from Homework 4, Exercise 3.2. These are the average S, I, R fractions from running 20 simulations. Click on it!
1:1000
(Data, abbreviated: [0.99, 0.01, 0.0], [0.9895, 0.0105, 0.0], …, [0.05, 0.2285, 0.7215])
👉 (Not graded) Manually fit the SIR curves to our data by adjusting β and γ.
β =
γ =
Show manual fit:
👉 To do this automatically, we will again need to define a loss function, this one depending on β and γ. This time, instead of comparing two vectors of numbers, we need to compare two vectors of vectors: the S, I, R values.
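A sketch along the same lines: sum the squared componentwise differences at every time point. Here hw4_results and sir_T are placeholder names for the spatial-model data and its time range; the initial condition [0.99, 0.01, 0.0] simply matches the first data point shown above.

# Sum-of-squares distance between the Euler SIR trajectory and the data.
function loss_sir(β, γ)
    simulated = euler_SIR(β, γ, [0.99, 0.01, 0.0], sir_T)
    return sum(sum(abs2, sim .- data) for (sim, data) in zip(simulated, hw4_results))
end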
loss_sir (generic function with 1 method)
350.1798977340281
👉 Use this loss function to find the optimal parameters β and γ.
0.0185516
0.00162125
Got it!
If you made it this far, congratulations – you have just taken your first step into the exciting world of scientific machine learning!
Before you submit
Remember to fill in your name and Kerberos ID at the top of this notebook.
as_mime (generic function with 1 method)
Function library
Just some helper functions used in the notebook.
hint (generic function with 1 method)
almost (generic function with 1 method)
still_missing (generic function with 2 methods)
keep_working (generic function with 2 methods)
Fantastic!
Splendid!
Great!
Yay ❤
Great! 🎉
Well done!
Keep it up!
Good job!
Awesome!
You got the right answer!
Let's move on to the next section.
correct (generic function with 2 methods)
not_defined (generic function with 1 method)